Morphological neurons, that is, morphological operators such as dilation and erosion with learnable structuring elements, have intrigued researchers for quite some time because of the power these operators offer despite their simplicity. These operators are known to be powerful nonlinear tools, but for a given problem, devising a suitable sequence of operations and their structuring elements is a non-trivial task. Consequently, existing works have mainly focused on this part of the problem without delving deeply into the applicability of morphological neurons as generic operators. A few works have tried to utilize morphological neurons as part of classification (and regression) networks when the input is a feature vector, but these methods mainly target a specific problem and offer no generic theoretical analysis. In this work, we theoretically analyze morphological neurons and show that they are far more powerful than previously anticipated. Our proposed morphological block, comprising dilation and erosion followed by their linear combination, represents a sum of hinge functions, which existing works have shown to perform well in classification and regression problems. Two morphological blocks can even approximate any continuous function. However, to facilitate the theoretical analysis in this paper, we restrict ourselves to the 1D version of the operators, where the structuring element operates on the whole input. Experimental evaluations also indicate that networks built with morphological neurons are effective compared to similarly structured neural networks.
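The block described above admits a compact implementation. Below is a minimal PyTorch sketch (the module name, layer sizes, and initialization are illustrative assumptions, not the authors' code): each dilation neuron computes max_i(x_i + s_i) and each erosion neuron computes min_i(x_i - t_i) with a learnable structuring element spanning the whole 1D input, and the responses are combined linearly.

```python
import torch
import torch.nn as nn

class MorphologicalBlock(nn.Module):
    """Dilation and erosion neurons followed by a linear combination.

    1D setting: each structuring element spans the whole input, so a
    dilation neuron computes max_i(x_i + s_i) and an erosion neuron
    computes min_i(x_i - t_i).
    """

    def __init__(self, in_features, n_dilations, n_erosions, out_features):
        super().__init__()
        # One learnable structuring element per dilation/erosion neuron.
        self.dilation_se = nn.Parameter(0.01 * torch.randn(n_dilations, in_features))
        self.erosion_se = nn.Parameter(0.01 * torch.randn(n_erosions, in_features))
        # Learnable linear combination of the morphological responses.
        self.combine = nn.Linear(n_dilations + n_erosions, out_features)

    def forward(self, x):
        # x: (batch, in_features)
        # Dilation: max over the input of (x + structuring element).
        d = (x.unsqueeze(1) + self.dilation_se.unsqueeze(0)).max(dim=-1).values
        # Erosion: min over the input of (x - structuring element).
        e = (x.unsqueeze(1) - self.erosion_se.unsqueeze(0)).min(dim=-1).values
        return self.combine(torch.cat([d, e], dim=-1))
```

Since max and min of affine terms are hinge-like functions, the linear combination at the output directly yields the sum-of-hinges form the analysis relies on.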
Human activity recognition (HAR) using IMU sensors, namely the accelerometer and gyroscope, has several applications in smart homes, healthcare, and human-machine interface systems. In practice, an IMU-based HAR system is expected to encounter variations in measurement due to sensor degradation, alien environments, or sensor noise, and will be subjected to unknown activities. In view of practical deployment, analyzing the statistical confidence of the activity class scores is an important requirement. In this paper, we therefore propose XAI-BayesHAR, an integrated Bayesian framework that improves the overall activity classification accuracy of IMU-based HAR solutions by recursively tracking the feature embedding vector and its associated uncertainty via a Kalman filter. Additionally, XAI-BayesHAR acts as an out-of-distribution (OOD) detector, using the predictive uncertainty to evaluate and detect alien input data distributions. Furthermore, a Shapley-value-based analysis of the proposed framework is carried out to understand the importance of the feature embedding vector, which is accordingly used for model compression.
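The recursive tracking of the embedding and its uncertainty can be illustrated with a standard Kalman filter. The sketch below is an assumption-laden illustration, not the paper's exact formulation: it posits a random-walk state model over the embedding, treats each new encoder output as a noisy measurement, and flags OOD inputs with a hypothetical threshold on the tracked uncertainty.

```python
import numpy as np

class EmbeddingKalmanTracker:
    """Track a feature embedding and its uncertainty with a Kalman filter.

    Assumed model: the underlying embedding is roughly constant
    (identity dynamics), and each encoder output is a noisy
    measurement of it.
    """

    def __init__(self, dim, process_var=1e-3, meas_var=1e-1):
        self.x = np.zeros(dim)              # state estimate (embedding)
        self.P = np.eye(dim)                # state covariance (uncertainty)
        self.Q = process_var * np.eye(dim)  # process noise covariance
        self.R = meas_var * np.eye(dim)     # measurement noise covariance

    def step(self, z):
        # Predict: identity dynamics, so uncertainty grows by Q.
        P_pred = self.P + self.Q
        # Update: blend prediction and new embedding via the Kalman gain.
        S = P_pred + self.R                   # innovation covariance
        K = P_pred @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(len(self.x)) - K) @ P_pred
        # Scalar uncertainty summary, e.g. the trace of the covariance.
        return self.x, float(np.trace(self.P))

# Hypothetical OOD rule: flag inputs whose tracked predictive
# uncertainty exceeds a calibrated threshold.
def is_ood(uncertainty, threshold=5.0):
    return uncertainty > threshold
```

In the actual framework the predictive uncertainty comes from the Bayesian classifier head; the threshold here stands in for whatever calibration the deployed system would use.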
Natural Language Generation (NLG) encompasses a large collection of tasks in the field of NLP. While many of these tasks have been tackled well by the cross-entropy (CE) loss, dialog generation poses a few unique challenges for this loss function. First, CE loss assumes that, for any given input, the only possible output is the one available as the ground truth in the training dataset. This is rarely true for any generation task, as there can be multiple semantically equivalent sentences, each with a different surface form. The problem is exacerbated further in dialog generation, where there can be multiple valid responses (for a given context) that not only have different surface forms but are also not semantically equivalent. Second, CE loss does not take the context into consideration while scoring the response; hence, it treats all ground truths with equal importance irrespective of the context. However, we may want the final agent to avoid certain classes of responses (e.g., bland, non-informative, or biased responses) and to assign relatively higher weight to more context-specific responses. To circumvent these shortcomings of the CE loss, we propose in this paper a novel loss function, CORAL, that directly optimizes recently proposed estimates of human preference for generated responses. Using CORAL, we can train dialog generation models without assuming that the ground-truth response is the only valid one. Moreover, the CORAL loss is computed based on both the context and the response. Extensive comparisons on two benchmark datasets show that the proposed method outperforms strong state-of-the-art baseline models of different sizes.
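One plausible instantiation of such a preference-driven objective is sketched below. This is a hedged illustration under assumptions, not the paper's exact CORAL formulation: it replaces CE's uniform weighting with a scalar reward r(context, response) from a learned preference model, so context-appropriate responses receive larger gradient weight and bland or dispreferred ground truths contribute little.

```python
import torch
import torch.nn.functional as F

def preference_weighted_loss(logits, target_ids, reward, pad_id=0):
    """Context-aware alternative to plain CE (illustrative sketch).

    logits:     (batch, seq_len, vocab) decoder outputs
    target_ids: (batch, seq_len) response tokens
    reward:     (batch,) preference estimate r(context, response),
                e.g. from a learned human-preference model
    """
    # Per-token log-likelihood of the response tokens.
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    # Mask padding and average to a per-sequence log-likelihood.
    mask = (target_ids != pad_id).float()
    seq_ll = (token_ll * mask).sum(dim=-1) / mask.sum(dim=-1).clamp(min=1)
    # Weight each response by its preference score: unlike CE, the
    # context enters the loss through the reward.
    return -(reward * seq_ll).mean()
```

In a full training setup the reward would come from a frozen preference model scoring (context, response) pairs; how CORAL actually estimates human preference and weights responses is specified in the paper itself.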